In computer networking, a reliable protocol provides reliability properties with respect to the delivery of data to the intended recipient(s), as opposed to an unreliable protocol, which does not notify the sender whether transmitted data was delivered. The term "reliable" is a synonym for assured, the term used by the ITU and the ATM Forum in the context of the ATM Service-Specific Coordination Function, for example for transparent assured delivery with AAL5.〔Young-ki Hwang et al., ''Service Specific Coordination Function for Transparent Assured Delivery with AAL5 (SSCF-TADAS)'', Military Communications Conference Proceedings (MILCOM 1999), vol. 2, pp. 878-882, DOI: 10.1109/MILCOM.1999.821329.〕〔ATM Forum, ''The User Network Interface (UNI), v. 3.1'', Prentice Hall PTR, 1995, ISBN 0-13-393828-X.〕〔ITU-T, ''B-ISDN ATM Adaptation Layer specification: Type 5 AAL'', Recommendation I.363.5, International Telecommunication Union, 1998.〕

Reliable protocols typically incur more overhead than unreliable protocols and, as a result, function more slowly and scale less well. This is often not an issue for unicast protocols, but it may become a problem for reliable multicast protocols. TCP, the main protocol used on the Internet, is a reliable unicast protocol. UDP, often used in computer games or in other situations where speed matters and the loss of a small amount of data is acceptable because of the transitory nature of the data, is an unreliable protocol.

Often, a reliable unicast protocol is also connection-oriented. For example, TCP is connection-oriented, with the virtual-circuit identifier consisting of the source and destination IP addresses and port numbers. Some unreliable protocols, such as ATM and Frame Relay, are connection-oriented as well. There are also reliable connectionless protocols, such as AX.25 when it passes data in I-frames, but this combination is rare: reliable connectionless protocols are uncommon in commercial and academic networks.

==History==
When the ARPANET pioneered packet switching, it provided a reliable packet delivery procedure to its connected hosts via its 1822 interface. A host computer simply arranged the data in the correct packet format, inserted the address of the destination host computer, and sent the message across the interface to its connected Interface Message Processor. Once the message was delivered to the destination host, an acknowledgement was delivered to the sending host. If the network could not deliver the message, it would send an error message back to the sending host.

Meanwhile, the developers of CYCLADES and of ALOHAnet demonstrated that it was possible to build an effective computer network without providing reliable packet transmission. This lesson was later embraced by the designers of Ethernet.

If a network does not guarantee packet delivery, then it becomes the host's responsibility to provide reliability by detecting and retransmitting lost packets. Subsequent experience on the ARPANET indicated that the network itself could not reliably detect all packet delivery failures, and this pushed responsibility for error detection onto the sending host in any case. This led to the development of the end-to-end principle, which is one of the Internet's fundamental design assumptions.
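The host-based approach described above can be illustrated with a minimal stop-and-wait sketch: the sender transmits a packet over an unreliable datagram service, waits for an acknowledgement, and retransmits on timeout. The receiver address, packet format, timeout, and retry limit below are illustrative assumptions rather than part of any particular protocol.

<syntaxhighlight lang="python">
# Minimal sketch of host-level reliability over an unreliable datagram
# service (stop-and-wait ARQ): each packet is retransmitted until the
# receiver acknowledges it or a retry limit is reached. The address,
# packet layout, timeout, and retry count are illustrative assumptions.
import socket

RECEIVER = ("127.0.0.1", 9999)   # assumed receiver address
TIMEOUT = 0.5                    # seconds to wait for an acknowledgement
MAX_RETRIES = 5

def send_reliably(payload: bytes, seq: int) -> bool:
    """Send one packet and wait for a matching ACK, retransmitting on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(MAX_RETRIES):
        sock.sendto(packet, RECEIVER)          # transmit (or retransmit)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True                    # delivery confirmed by the peer
        except socket.timeout:
            continue                           # lost packet or lost ACK: try again
    return False                               # report failure to the application
</syntaxhighlight>

TCP applies the same basic idea, but with sequence numbers, sliding windows, and adaptive retransmission timers rather than one outstanding packet at a time.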